Responsible AI Practice
Making Transparency Advocates: An Educational Approach Towards Better Algorithmic Transparency in Practice
Andrew Bell and Julia Stoyanovich
Concerns about the risks and harms posed by artificial intelligence (AI) have resulted in significant study into algorithmic transparency, giving rise to a sub-field known as Explainable AI (XAI). Unfortunately, despite a decade of development in XAI, an existential challenge remains: progress in research has not been fully translated into the actual implementation of algorithmic transparency by organizations. In this work, we test an approach for addressing the challenge by creating transparency advocates, or motivated individuals within organizations who drive a ground-up cultural shift towards improved algorithmic transparency. Over several years, we created an open-source educational workshop on algorithmic transparency and advocacy. We delivered the workshop to professionals across two separate domains to improve their algorithmic transparency literacy and willingness to advocate for change. In the weeks following the workshop, participants applied what they learned, such as speaking up for algorithmic transparency at an organization-wide AI strategy meeting. We also make two broader observations: first, advocacy is not a monolith and can be broken down into different levels. Second, individuals' willingness for advocacy is affected by their professional field. For example, news and media professionals may be more likely to advocate for algorithmic transparency than those working at technology start-ups.
- North America > United States > New York > New York County > New York City (0.04)
- North America > United States > Michigan (0.04)
- North America > United States > Massachusetts > Suffolk County > Boston (0.04)
- (2 more...)
- Research Report (1.00)
- Instructional Material > Course Syllabus & Notes (1.00)
- Media > News (1.00)
- Information Technology (1.00)
- Law > Statutes (0.93)
- (2 more...)
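The tag lines above follow a consistent pattern: a `>`-delimited taxonomy path followed by a confidence score in parentheses. As a minimal sketch (the `parse_tag` helper is hypothetical, not part of any tool referenced here), such a line could be parsed like this:

```python
import re

def parse_tag(line):
    """Parse a taxonomy tag line such as '- Media > News (1.00)'
    into (path, score), or return None if the line doesn't match."""
    m = re.match(r"^-\s*(.+?)\s*\(([\d.]+)\)\s*$", line)
    if m is None:
        return None
    # Split the taxonomy path on '>' and strip surrounding whitespace.
    path = [part.strip() for part in m.group(1).split(">")]
    return path, float(m.group(2))

print(parse_tag("- Law > Statutes (0.93)"))
# → (['Law', 'Statutes'], 0.93)
```

Lines that don't fit the pattern, such as the `(2 more...)` truncation markers, simply return `None`.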
GPT-4 and the Next Frontier of Generative AI – Towards AI
Originally published on Towards AI. This is a follow-up to my part 1 on ChatGPT. GPT-4 has burst onto the scene! OpenAI officially released the larger and more powerful successor to GPT-3 with many improvements, including the ability to process images, draft a lawsuit, and handle up to a 25,000-word input.¹ During testing, OpenAI reported that GPT-4 was smart enough to solve a CAPTCHA by hiring a human on TaskRabbit to do it on its behalf.²
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (1.00)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.43)
3 ways to center humans in your company's artificial intelligence efforts
ChatGPT, the powerful new artificial intelligence tool from OpenAI that can answer questions, chat with humans, and generate text, has dominated headlines in the past few months. The tool is advanced enough to pass law school exams (though with fairly low scores), but it has also veered into strange conversations and has shared misinformation. It also highlights an important area that companies using or thinking about using AI need to confront: how to embrace AI in a way that doesn't harm humans. "Leadership involves absolutely centering the human and being rigorous before releasing into the wild things that affect these humans," said Renée Richardson Gosline, a senior lecturer and principal research scientist at MIT Sloan. "Having the courage and ethics to say we want to cultivate a system and a relationship with our customers whereby we don't simply always extract, but we also share value -- that's what leads to loyalty in the long term."
- Law (0.70)
- Education > Educational Setting > Higher Education (0.55)
- Education > Curriculum > Subject-Specific Education (0.55)
- Information Technology > Artificial Intelligence > Natural Language > Large Language Model (0.91)
- Information Technology > Artificial Intelligence > Natural Language > Chatbot (0.73)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks > Deep Learning > Generative AI (0.51)
Responsible AI at Amazon Web Services: Q&A with Diya Wynn - The New Stack
Last year's release of ChatGPT alerted many to the great strides that machine learning has made, and will continue to make, in the years going forward. But how do we make sure that this great power is being used responsibly, free from bias and malicious intent? At Amazon Web Services, Diya Wynn is the senior practice manager for Responsible AI. Recently, she sat down with The New Stack to discuss all things Responsible AI. At AWS, Wynn created the customer-facing responsible AI practice and built a team of individuals with diverse backgrounds, including members of the LGBTQIA and differently-abled communities.
- Information Technology > Services (0.61)
- Education > Educational Setting (0.47)
Responsible AI will give you a competitive advantage
There is little doubt that AI is changing the business landscape and providing competitive advantages to those that embrace it. It is time, however, to move beyond the simple implementation of AI and to ensure that AI is being done in a safe and ethical manner. This is called responsible AI and will serve not only as a protection against negative consequences, but also as a competitive advantage in and of itself. Responsible AI is a governance framework that covers ethical, legal, safety, privacy, and accountability concerns.
Top 10 open-source Responsible AI toolkits
According to Accenture's 2022 Tech Vision research, only 35% of global consumers trust how organisations implement AI. And 77% think organisations must be held accountable for their misuse of AI. "Responsible AI practice is starting to go mainstream. In fact, Big Tech has large in-house teams and divisions under their Responsible AI practice," said Nikhil Kurhe, co-founder and CEO of Finarkein Analytics. Responsible AI toolkits can make AI applications and systems fair, robust, and transparent. We have made a list of toolkits and resources to help implement Responsible AI.
10 AI Predictions For 2022
Prediction #6: Collaboration and investment between American and Chinese actors in the field of AI will all but cease. Language is humanity's most important invention. More than any other attribute, it is the defining hallmark of our species' intelligence. The ability to accurately automate language therefore opens up virtually unbounded opportunities for value creation. The field of natural language processing (NLP) has been upended and turbocharged in the past few years by a foundational new technology known as transformers, first introduced by Google researchers in a 2017 paper.
- North America > Canada > Ontario > Toronto (0.08)
- North America > United States > New York (0.05)
- Asia > China > Beijing > Beijing (0.05)
- Information Technology > Services (1.00)
- Government (1.00)
- Banking & Finance (0.96)
- Leisure & Entertainment (0.95)
Building better startups with responsible AI – TechCrunch
Founders tend to think responsible AI practices are challenging to implement and may slow the progress of their business. They often jump to mature examples like Salesforce's Office of Ethical and Humane Use and think that the only way to avoid creating a harmful product is building a big team. The truth is much simpler. I set out to learn how founders were thinking about responsible AI practices on the ground by speaking with a handful of successful early-stage founders, and I found that many of them were already implementing responsible AI practices. They just call it "good business."